Accurate segmentation of power lines in aerial images is essential for ensuring the flight safety of aerial vehicles. Acquiring high-quality ground-truth annotations for training a deep learning model is a laborious process, so algorithms that can transfer knowledge from labelled synthetic data to unlabelled real images are in high demand. This problem is studied in unsupervised domain adaptation (UDA). Recent self-training approaches, which train a model with pseudo labels on the target domain, have achieved remarkable performance in UDA for semantic segmentation. However, the pseudo labels are noisy due to the discrepancy between the two data distributions. We identify that context dependency is important for bridging this domain gap. Motivated by this, we propose QuadFormer, a novel framework designed for domain-adaptive semantic segmentation. The hierarchical quadruple transformer combines cross-attention and self-attention mechanisms to adapt transferable context. Based on the cross-attentive and self-attentive feature representations, we introduce a pseudo-label correction scheme to denoise the pseudo labels online and reduce the domain gap. Additionally, we present two datasets, ARPLSyn and ARPLReal, to further advance research in unsupervised domain-adaptive power line segmentation. Finally, experimental results indicate that our method achieves state-of-the-art performance for domain-adaptive power line segmentation on ARPLSyn$\rightarrow$TTPLA and ARPLSyn$\rightarrow$ARPLReal.
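The abstract's exact pseudo-label correction scheme is not spelled out, but the general self-training idea it builds on can be sketched. The following is a minimal, hypothetical illustration of confidence-thresholded pseudo-label generation on an unlabelled target image; the function name and threshold are assumptions, not the paper's method.

```python
import numpy as np

def generate_pseudo_labels(probs, threshold=0.9):
    """Keep per-pixel pseudo labels only where the model is confident.

    probs: (H, W, C) softmax outputs on an unlabelled target image.
    Returns an (H, W) label map with -1 marking pixels left unlabelled
    (to be ignored by the training loss).
    """
    labels = probs.argmax(axis=-1)
    confidence = probs.max(axis=-1)
    labels[confidence < threshold] = -1
    return labels
```

Noisy pseudo labels arise exactly where target-domain confidence is miscalibrated, which is what a correction scheme like the one described must address.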
Robust learning on noisily labelled data is an important task for practical applications, since label noise directly leads to poor generalization of deep learning models. Existing label-noise learning methods usually assume that the underlying class distribution of the training data is balanced. However, real-world data are often imbalanced, leading to an inconsistency between the observed class distribution and the intrinsic class distribution under label noise. This distribution inconsistency makes label-noise learning more challenging, because it is hard to distinguish clean samples from noisy samples for the intrinsic tail classes. In this paper, we propose a learning framework for label-noise learning with intrinsically long-tailed data. Specifically, we propose a robust sample-selection method called Two-stage Bi-dimensional Sample Selection (TBSS) to better separate clean samples from noisy samples, especially for the tail classes. TBSS consists of two new separation metrics that jointly separate the samples within each class. Extensive experiments on multiple noisily labelled datasets with intrinsically long-tailed distributions demonstrate the effectiveness of our method.
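The two separation metrics of TBSS are not specified in the abstract, but a common baseline it improves upon is per-class small-loss selection, which the long-tail setting makes fragile. The sketch below illustrates that baseline only; the function and the fixed keep ratio are assumptions for illustration.

```python
import numpy as np

def select_clean_per_class(losses, labels, keep_ratio=0.5):
    """Per-class small-loss selection: within each observed class,
    treat the lowest-loss fraction of samples as clean.

    losses: (N,) per-sample training losses.
    labels: (N,) observed (possibly noisy) class labels.
    Returns an (N,) boolean mask of samples selected as clean.
    """
    clean = np.zeros(len(losses), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = max(1, int(len(idx) * keep_ratio))
        # keep the k samples with the smallest loss in this class
        clean[idx[np.argsort(losses[idx])[:k]]] = True
    return clean
```

For intrinsic tail classes, loss statistics are estimated from very few samples, so a single loss-based criterion separates poorly; this is the weakness that motivates using two metrics jointly.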
Standard approaches to video recognition typically operate on full input videos, which is inefficient due to the widespread spatio-temporal redundancy in videos. Recent progress in masked video modelling (i.e., VideoMAE) has shown that vanilla Vision Transformers (ViT) can complete spatio-temporal context given only limited visual content. Inspired by this, we propose Masked Action Recognition (MAR), which reduces redundant computation by discarding a proportion of patches and operating only on part of the video. MAR contains two indispensable components: cell running masking and a bridging classifier. Specifically, to enable the ViT to easily perceive details beyond the visible patches, cell running masking is presented to preserve the spatio-temporal correlations in videos, ensuring that patches at the same spatial location can be observed in turn for easy reconstruction. Moreover, we notice that although the partially observed features can reconstruct semantically explicit invisible patches, they fail to achieve accurate classification. To address this, a bridging classifier is proposed to bridge the semantic gap between the ViT-encoded features used for reconstruction and the features specialized for classification. Our proposed MAR reduces the computational cost of ViT by 53%, and extensive experiments show that MAR consistently outperforms existing ViT models by a notable margin. In particular, we find that a ViT-Large trained with MAR outperforms a ViT-Huge trained with the standard scheme by convincing margins on both the Kinetics-400 and Something-Something v2 datasets, while the computational overhead of our ViT-Large is only 14.5% of that of ViT-Huge.
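The compute saving comes from the encoder only processing visible tokens. The sketch below shows plain random patch masking at MAR's stated ratio; it does not implement the paper's cell running masking (which additionally preserves spatio-temporal correlations), and the function name is an assumption.

```python
import numpy as np

def mask_patches(tokens, mask_ratio=0.53, seed=None):
    """Randomly keep a subset of patch tokens; a ViT encoder that
    processes only the kept tokens cuts compute roughly by mask_ratio.

    tokens: (N, D) array of patch embeddings for one video clip.
    Returns (kept_tokens, kept_indices), indices in ascending order.
    """
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(tokens.shape[0] * (1 - mask_ratio))))
    keep = np.sort(rng.choice(tokens.shape[0], n_keep, replace=False))
    return tokens[keep], keep
```

Cell running masking differs from this uniform sampling by cycling the visible cells over time so every spatial location is eventually observed, which is what makes reconstruction of the dropped patches tractable.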
Missing data are ubiquitous in real-world data analysis and present daunting challenges. While there is a growing body of literature on fairness in the analysis of fully observed data, there is little theoretical work investigating fairness in the analysis of incomplete data. In practice, a popular approach for handling missing data is to use only the set of complete cases, i.e., observations with all features observed, to train a prediction algorithm. However, depending on the missing-data mechanism, the distribution of the complete cases can differ substantially from the distribution of the complete data. When the goal is to develop a fair algorithm in the complete-data domain, where there are no missing values, an algorithm that is fair in the complete-case domain may exhibit disproportionate bias towards some marginalized groups in the complete-data domain. To fill this significant gap, we study the problem of estimating the fairness of an arbitrary model in the complete-data domain using only complete cases. We provide upper and lower bounds on the fairness estimation error and conduct numerical experiments to assess our theoretical results. Our work provides the first known theoretical results on fairness guarantees in the analysis of incomplete data.
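The core issue can be made concrete with one fairness metric. The sketch below, a hypothetical illustration rather than the paper's estimator, computes a demographic parity gap on the full data and on the complete cases only; the two can disagree whenever missingness is not completely at random.

```python
import numpy as np

def demographic_parity_gap(pred, group):
    """|P(pred = 1 | A = 0) - P(pred = 1 | A = 1)| for binary predictions."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def complete_case_gap(pred, group, X):
    """Same gap, but evaluated only on rows with no missing feature."""
    mask = ~np.isnan(X).any(axis=1)
    return demographic_parity_gap(pred[mask], group[mask])
```

Bounding the difference between these two quantities, for an arbitrary model and missing-data mechanism, is precisely the estimation-error question the paper studies.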
In linear regression, SLOPE is a relatively new convex optimization method that generalizes the Lasso via a sorted L1 penalty: larger fitted coefficients are penalized more heavily. This sorted regularization requires an input penalty sequence $\lambda$, rather than the scalar penalty of the Lasso, making its design extremely expensive computationally. In this paper, we propose two efficient algorithms to design the possibly high-dimensional SLOPE penalty so as to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) under the approximate message passing regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) to design a subclass of SLOPE, referred to as k-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs through extensive experiments on synthetic data and real-world datasets.
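The sorted L1 penalty itself is simple to state: pair the $i$-th largest $|\beta_j|$ with the $i$-th largest $\lambda_i$. A minimal sketch, with a hypothetical helper for building a k-level sequence (the `(value, count)` encoding is an assumption of this sketch, not the paper's notation):

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted-L1 (SLOPE) penalty: sum_i lam_i * |beta|_(i),
    where |beta|_(1) >= |beta|_(2) >= ... and lam is non-increasing."""
    return float(np.sort(np.abs(beta))[::-1] @ np.asarray(lam))

def k_level_lambda(p, levels):
    """Build a k-level penalty sequence taking only k distinct values.

    levels: non-increasing list of (value, count) pairs whose counts sum to p.
    """
    lam = np.concatenate([np.full(count, value) for value, count in levels])
    assert lam.size == p, "counts must sum to the dimension p"
    return lam
```

Restricting the design space from $p$ free values to $k$ levels is what makes the zeroth-order coordinate descent tractable: each coordinate step only perturbs one of the $k$ shared values.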
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
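The first insight, generating class centers from support masks to re-weight query features, is commonly realized via masked average pooling followed by similarity-based re-weighting. The sketch below is a generic illustration under that assumption; both function names and the `1 + sim` weighting form are hypothetical, not RefT's exact module.

```python
import numpy as np

def class_center(support_feat, support_mask):
    """Masked average pooling: average support features inside the
    object mask to obtain a class-center vector.

    support_feat: (H, W, D) feature map; support_mask: (H, W) binary mask.
    """
    m = support_mask[..., None].astype(float)
    return (support_feat * m).sum(axis=(0, 1)) / np.maximum(m.sum(), 1e-6)

def reweight_query(query_feat, center):
    """Re-weight query features by cosine similarity to the class center."""
    qn = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-6)
    cn = center / (np.linalg.norm(center) + 1e-6)
    sim = qn @ cn                       # (H, W) similarity map
    return query_feat * (1 + sim)[..., None]
```

The second, instance-level reference (linking support object queries to query-image object queries via cross-attention) operates on learned queries rather than feature maps, which is why the enhancement happens "twice."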
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments, objectively verifying the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, with 101 subjects from different age groups, 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this setting, low image resolution and noise interference are the new challenges faced by surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by low image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; and (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups: adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, and projected images), network architectures, and training schemes. Through this study, we obtain two insights: 1) the input representation plays a crucial role in robustness; specifically, under certain corruptions, different representations perform very differently; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We hope that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
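A corruption benchmark of this kind is built by applying controlled perturbations to clean point clouds. As a minimal sketch of the measurement-noise group, here is a hypothetical Gaussian-jitter corruption (the function name and the default sigma are assumptions, not SemanticKITTI-C's exact parameters):

```python
import numpy as np

def corrupt_gaussian(points, sigma=0.02, seed=0):
    """Measurement-noise corruption: jitter each LiDAR point's xyz
    coordinates with zero-mean Gaussian noise of std sigma (metres).

    points: (N, 3) array of point coordinates. Returns a corrupted copy.
    """
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

Robustness is then reported as the drop in segmentation mIoU between the clean scan and its corrupted variant, aggregated over corruption types and severities.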